(a) A data set for classification using the orthogonal partitioning rules. (b) The orthogonal decision tree.
However, if the oblique partitioning rule is employed, a decision tree can be much simpler and more efficient. For instance, for the same data set shown in Figure 3.48(a), one oblique partitioning rule can be used to create a perfect partition leading to two subspaces, each of which contains only one class. Figure 3.48(b) shows such a decision tree, where only one oblique partitioning rule is employed. There is no doubt that a decision tree constructed using the oblique division approach can generate a more parsimonious and efficient tree structure, and perhaps better performance.
Figure 3.48 (a) A data set for classification using an oblique partitioning rule. (b) The oblique decision tree.
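To make the contrast concrete, the sketch below is a toy illustration, not the data from the figure; the separating line x1 + x2 = 1 and all values are assumptions. It builds a two-class set that one oblique rule w . x + b > 0 partitions perfectly, while no single orthogonal (axis-aligned) threshold can.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: two classes separated by the oblique line x1 + x2 = 1.
    X = rng.uniform(0.0, 1.0, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    # One oblique partitioning rule: a single linear test w . x + b > 0.
    w, b = np.array([1.0, 1.0]), -1.0
    oblique_pred = (X @ w + b > 0).astype(int)
    print("oblique rule accuracy:", (oblique_pred == y).mean())   # 1.0

    # Best single orthogonal rule: threshold one coordinate at a time,
    # trying both class orientations for each candidate threshold.
    best = 0.0
    for j in (0, 1):
        for t in X[:, j]:
            pred = (X[:, j] > t).astype(int)
            best = max(best, (pred == y).mean(), (pred != y).mean())
    print("best single orthogonal split accuracy:", best)         # < 1.0

An orthogonal tree must stack several such thresholds into a staircase to approximate the boundary, which is exactly why its structure grows while the oblique tree needs one node.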
The basic principle of the random forest algorithm is to avoid overfitting a classifier through creating many smaller trees [Ho, 1998, … et al., 2001]. Unlike a decision tree model, a random forest model can exploit oblique hyperplanes, which can give better performance in cases such as the example shown in Figure 3.48.
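As a rough sketch of this principle (using scikit-learn as an assumed library; note that each of its trees still splits on axis-aligned thresholds, so the oblique boundary is approximated by averaging many small trees rather than by any single split), a forest of shallow trees typically tracks an oblique boundary like the one in Figure 3.48 more closely than one shallow tree does:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    def make_data(n):
        # Hypothetical obliquely separated data, as in the sketch above.
        X = rng.uniform(0.0, 1.0, size=(n, 2))
        return X, (X[:, 0] + X[:, 1] > 1.0).astype(int)

    X_train, y_train = make_data(1000)
    X_test, y_test = make_data(1000)

    # One shallow orthogonal tree vs. a forest of many shallow trees.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    forest = RandomForestClassifier(n_estimators=200, max_depth=3,
                                    random_state=0)

    print("single tree:", tree.fit(X_train, y_train).score(X_test, y_test))
    print("random forest:", forest.fit(X_train, y_train).score(X_test, y_test))
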
The random forest